Navigation through doorways using omnivision
by Jef Mangelschots
These are the notes I took during Bruce Weimer's class on "Navigating through doorways using omnivision" at the November 2009 meeting.
Bruce started by pointing out that there are two good sources of vision software out there: RoboRealm and OpenCV. OpenCV has extensive libraries for eigenvalue calculations, which makes it useful for facial recognition. RoboRealm has some good features for landmark detection, primarily checkerboard-type fiducial recognition.
A new challenge posed itself: navigating a Leaf-type robot through a doorway. Leaf robots are big and usually barely fit through a door frame. While moving through the frame, most sensors are defeated: the robot is too close for the sonar sensors, and the forward-looking camera loses sight of the door while passing through it, ... Sonar is good for getting you to within approximately 4 ft of the door, but after that you have to figure something else out.
He uses visual landmarks (fiducials) to get the robot to within 3 ft of a door frame, but landmark navigation alone cannot align the robot accurately enough to negotiate the door.
Bruce brought up the use of scanning collimated (narrow-beam) sonar. That still doesn't work well enough. IR rangefinders have better resolution. A laser rangefinder would be better still, but those are still expensive (although Bruce is a doctor, he is cheap when it comes to his robotics - probably the reason he is still married).
Webcams have a tunnel-vision problem: with only about a 20-degree field of view, they become useless for seeing the surrounding door frame at close range.
He then suggested omnivision as a potential solution. Omnivision is the technique of warping the surrounding view into the lens of a webcam using a curved mirror. Different sizes and shapes come to mind: spheres, cones, paraboloids, egg shapes, hyperboloids, ...
This technique is used in the real-estate industry to get surround-view images of house interiors. Those camera systems typically fall outside Bruce's budget (pretty much everything does).
So he rigged up his own omnivision setup using a webcam, a round reflective Christmas ornament ball, and a camera tripod. He mounts the webcam pointing up toward the ceiling and dangles the Christmas ball above it from a simple piece of cardboard (how cheap can you get):
The following pictures show the resulting view:
This is the result of RoboRealm's Spherical transform function. We immediately notice why Christmas balls are so cheap: there is a lot of tolerance on the spherical dimensions. Obviously, the manufacturers of these items did not have the robot builder in mind when they designed them.
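Bruce did this step with RoboRealm's Spherical transform. For anyone who would rather experiment in OpenCV (the other package mentioned at the start of the talk), a related trick is to unwrap the ball image into a panoramic strip with cv2.warpPolar. This is only a rough sketch, not Bruce's code; the file name, center, and radius are assumptions you would replace for your own rig.

```python
import cv2

# Placeholder file name: a frame captured from the upward-looking webcam.
img = cv2.imread("ornament_view.jpg")
h, w = img.shape[:2]

# Assumption: the reflective ball is roughly centered in the frame and its
# reflection fills most of the image. Adjust center/radius to your rig.
center = (w // 2, h // 2)
radius = min(center)

# Unwrap the circular mirror image into a (radius x angle) strip.
# In the output, x runs along the radius and y runs around the circle.
strip = cv2.warpPolar(img, (radius, 1080), center, radius,
                      cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

# Rotate so the panorama reads left to right.
panorama = cv2.rotate(strip, cv2.ROTATE_90_COUNTERCLOCKWISE)
cv2.imwrite("panorama.jpg", panorama)
```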
This is the Prewitt Edge filter:
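RoboRealm has Prewitt built in; the same filter is easy to reproduce in OpenCV by convolving with the two 3x3 Prewitt kernels. A minimal sketch (the file names are placeholders, not part of Bruce's setup):

```python
import cv2
import numpy as np

gray = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)

# Prewitt kernels for horizontal and vertical gradients.
kx = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=np.float32)
ky = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=np.float32)

gx = cv2.filter2D(gray, cv2.CV_32F, kx)
gy = cv2.filter2D(gray, cv2.CV_32F, ky)

# Gradient magnitude, scaled back to an 8-bit image for display.
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
cv2.imwrite("prewitt.jpg", edges)
```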
Then you can apply the Canny edge filter:
The problem is not a lack of filters. The existing filters can do just about any image transform you want. The problem is finding the right parameters for every occasion. If a human could adjust the parameters for every image the robot takes, you would always find some ideal filtering for further processing. Unfortunately, that is not possible. Lighting conditions vary dramatically across scenes, times of day, angles of view, ... Here is an example of the same view with different parameters for the Canny Edge filter. Notice that it has become useless.
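To make the point concrete, here is a small OpenCV sketch (again not Bruce's code) that runs Canny with a threshold pair that happens to work and one that drowns the same scene in clutter, plus a common workaround that derives the thresholds from the median brightness of the image itself:

```python
import cv2
import numpy as np

gray = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)

# Hand-picked thresholds that happen to suit one particular scene...
edges_tuned = cv2.Canny(gray, 50, 150)
# ...and a second pair that buries the same scene in clutter.
edges_noisy = cv2.Canny(gray, 5, 20)

# A common heuristic: derive the thresholds from the image itself,
# here roughly +/-33% around the median brightness.
m = np.median(gray)
lo, hi = int(max(0, 0.67 * m)), int(min(255, 1.33 * m))
edges_auto = cv2.Canny(gray, lo, hi)

cv2.imwrite("canny_tuned.jpg", edges_tuned)
cv2.imwrite("canny_noisy.jpg", edges_noisy)
cv2.imwrite("canny_auto.jpg", edges_auto)
```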
Side fill and Smooth Hull:
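Side Fill and Smooth Hull are RoboRealm modules with no single OpenCV equivalent. The hull step is in the same spirit as cv2.convexHull, as in this rough sketch (it assumes the OpenCV 4 findContours signature and an edge image that actually contains at least one blob):

```python
import cv2
import numpy as np

edges = cv2.imread("canny_tuned.jpg", cv2.IMREAD_GRAYSCALE)

# Find the blobs left by the edge filter and keep the largest one.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# Smooth the blob into its convex hull and draw it filled on a blank canvas.
hull = cv2.convexHull(largest)
canvas = np.zeros_like(edges)
cv2.drawContours(canvas, [hull], -1, 255, thickness=-1)
cv2.imwrite("hull.jpg", canvas)
```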
and finally Harris Corner:
The blue dots reveal a passageway.
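Harris corners are also available in OpenCV; a minimal sketch that marks the strongest responses in blue (the parameters are the defaults from the OpenCV documentation, not Bruce's RoboRealm settings) looks like this:

```python
import cv2
import numpy as np

gray = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)

# Harris corner response; blockSize, ksize and k will likely need tuning.
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

# Mark the strongest responses, analogous to RoboRealm's blue dots.
color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
color[response > 0.01 * response.max()] = (255, 0, 0)  # blue in BGR
cv2.imwrite("harris.jpg", color)
```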
Here is a good article describing the concept.
This link describes omnivision using a rear-view truck mirror.
Here is a presentation on robot navigation using omnivision.
Omni-vision based autonomous mobile robotic platform
Discussion on the RoboRealm forum about omnivision.
This is a source for truck rear-view mirrors.
Here is a link to a potential source of reflective balls.
Here is a source for acrylic tubes to mount the sphere above a camera.
McMaster-Carr also carries acrylic tubes.
This is a link to the GoPano camera for snapping 360-degree photos.